Results 1 - 20 of 315,056
1.
Rev. esp. patol ; 57(2): 77-83, Apr-Jun 2024. tab, illus
Article in Spanish | IBECS | ID: ibc-232410

ABSTRACT



Introduction: In a pathological anatomy service, the workload in medical time is analyzed according to the complexity of the samples received, its distribution among pathologists is assessed, and a new computer algorithm that favors an equitable distribution is presented. Methods: Following the second edition of the Spanish guidelines for the estimation of workload in cytopathology and histopathology (medical time) according to the Spanish Pathology Society-International Academy of Pathology (SEAP-IAP) catalog of samples and procedures, we determined the workload units (UCL) per pathologist and the overall UCL of the service, the average workload of the service (MU factor), the time dedicated by each pathologist to healthcare activity, and the optimal number of pathologists according to the workload of the service. Results: We determined 12,197 total annual UCL for the chief pathologist, and 14,702 and 13,842 UCL for the associate pathologists, for a service-wide total of 40,742 UCL. The calculated MU factor is 4.97. The chief pathologist devoted 72.25% of his working day to healthcare activity, while the associate pathologists dedicated 87.09% and 82.01% of their working hours. The optimal number of pathologists for the service is found to be 3.55. Conclusions: The results demonstrate medical work overload and a non-equitable distribution of UCL among pathologists. We propose a computer algorithm, linked to the laboratory information system, capable of distributing the workload equitably while taking into account the type of specimen, its complexity, and the healthcare dedication of each pathologist.
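The staffing arithmetic reported above can be reproduced with a short calculation. A minimal sketch, assuming an annual UCL capacity per full-time pathologist back-calculated from the reported optimum (the SEAP-IAP catalog, which assigns UCL per specimen type, is not reproduced here):

```python
# Sketch of the service-level workload arithmetic described in the abstract.
# ANNUAL_UCL_CAPACITY is an assumption back-calculated from the reported
# optimum (40,742 UCL / 3.55 pathologists); per-pathologist totals are taken
# from the abstract. Note the three figures sum to 40,741, so the reported
# 40,742 reflects rounding upstream.
ANNUAL_UCL_CAPACITY = 40742 / 3.55   # ~11,477 UCL per full-time pathologist

ucl = {"chief": 12197, "associate_1": 14702, "associate_2": 13842}
total = sum(ucl.values())
fair_share = total / len(ucl)

for name, u in ucl.items():
    print(f"{name}: {u} UCL ({100 * (u - fair_share) / fair_share:+.1f}% vs. equal share)")

print(f"total: {total} UCL -> optimal staffing: {total / ANNUAL_UCL_CAPACITY:.2f} pathologists")
```

An equitable assignment algorithm of the kind the authors propose would presumably route each incoming case to whichever pathologist currently has the lowest accumulated UCL, weighted by their assistential time fraction.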


Subject(s)
Humans , Male , Female , Pathology , Workload , Pathologists , Pathology Department, Hospital , Algorithms
2.
PeerJ ; 12: e17184, 2024.
Article in English | MEDLINE | ID: mdl-38560451

ABSTRACT

Background: Single-cell annotation plays a crucial role in the analysis of single-cell genomics data, yet despite the existence of numerous single-cell annotation algorithms, a comprehensive tool for integrating and comparing them has been lacking. Methods: This study investigated widely adopted single-cell annotation algorithms. Ten were selected, classified as either reference-dataset-dependent or marker-gene-dependent approaches: SingleR, Seurat, sciBet, scmap, CHETAH, scSorter, sc.type, cellID, scCATCH, and SCINA. Building upon these algorithms, we developed an R package named scAnnoX for the integration and comparative analysis of single-cell annotation algorithms. Results: The scAnnoX software package provides a cohesive framework for annotating cells in scRNA-seq data, enabling researchers to perform comparative analyses among the cell type annotations contained in scRNA-seq datasets more efficiently. Its integrated environment streamlines the testing, evaluation, and comparison of the various algorithms. Among the ten annotation tools evaluated, SingleR, Seurat, sciBet, and scSorter emerged as the top performers in prediction accuracy, with SingleR and sciBet performing particularly well, offering guidance for users. The scAnnoX package is available at https://github.com/XQ-hub/scAnnoX.
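scAnnoX itself is an R package wrapping the ten tools above; purely as a language-neutral illustration of the benchmarking loop such a framework automates, a sketch in which a hypothetical annotate_with() wrapper stands in for the individual tools:

```python
# Illustrative benchmark harness only: `annotate_with` is a hypothetical
# stand-in for invoking SingleR, Seurat, etc. (the real tools are R packages
# wrapped by scAnnoX). Accuracy against reference labels ranks the tools.

def annotate_with(tool: str, cells: list) -> list:
    """Hypothetical wrapper returning one predicted cell type per cell."""
    return [f"type_{hash((tool, c)) % 3}" for c in cells]  # placeholder predictions

def accuracy(pred: list, truth: list) -> float:
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

cells = list(range(100))
truth = [f"type_{c % 3}" for c in cells]                   # reference annotations
tools = ["SingleR", "Seurat", "sciBet", "scmap", "CHETAH"]

ranking = sorted(((t, accuracy(annotate_with(t, cells), truth)) for t in tools),
                 key=lambda kv: kv[1], reverse=True)
for tool, acc in ranking:
    print(f"{tool}: {acc:.2f}")
```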


Subject(s)
Single-Cell Analysis , Software , Algorithms , Genomics , Existentialism
3.
J Immunol ; 212(8): 1255, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38560812
4.
Front Endocrinol (Lausanne) ; 15: 1376220, 2024.
Article in English | MEDLINE | ID: mdl-38562414

ABSTRACT

Background: Identification of patients at risk for type 2 diabetes mellitus (T2DM) can not only prevent complications and reduce suffering but also ease the health care burden. While routine physical examination can provide useful information for diagnosis, manual exploration of routine physical examination records is not feasible due to the high prevalence of T2DM. Objectives: We aim to build interpretable machine learning models for T2DM diagnosis and uncover important diagnostic indicators from physical examination, including age- and sex-related indicators. Methods: In this study, we present three weighted diversity density (WDD)-based algorithms for T2DM screening that use physical examination indicators. The algorithms are highly transparent and interpretable, and two of them tolerate missing values. Patients: We collected data on 43 physical examination indicators from 11,071 T2DM patients and 126,622 healthy controls at the Affiliated Hospital of Southwest Medical University. After data processing, we used a data matrix containing 16,004 EHRs and 43 clinical indicators for modelling. Results: The indicators were ranked according to their model weights, and the top 25% of indicators were found to be directly or indirectly related to T2DM. We further investigated the clinical characteristics of different age and sex groups and found that the algorithms can detect relevant indicators specific to these groups. The algorithms performed well in T2DM screening, with the highest area under the receiver operating characteristic curve (AUC) reaching 0.9185. Conclusion: This work utilized the interpretable WDD-based algorithms to construct T2DM diagnostic models based on physical examination indicators. By modeling data grouped by age and sex, we identified several predictive markers related to age and sex, uncovering characteristic differences among various groups of T2DM patients.
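The WDD formulation itself is not given in the abstract; as a hedged sketch of the two properties it emphasizes, a transparent weighted indicator score and missing-value tolerance via weight renormalization, under entirely assumed weights and reference values:

```python
def transparent_risk_score(indicators, weights, reference):
    """Toy interpretable screening score: weighted deviation of a patient's
    indicators from a healthy reference profile, skipping missing values and
    renormalizing the weights (missing-value tolerance). This is an
    illustrative stand-in, not the WDD formulation from the paper."""
    used, score = 0.0, 0.0
    for name, weight in weights.items():
        value = indicators.get(name)      # None means the indicator is missing
        if value is None:
            continue
        score += weight * abs(value - reference[name])
        used += weight
    if used == 0.0:
        raise ValueError("no usable indicators")
    return score / used                   # renormalize over observed indicators

# Assumed indicator weights and healthy reference values, for illustration only.
weights = {"fasting_glucose": 0.5, "bmi": 0.3, "triglycerides": 0.2}
reference = {"fasting_glucose": 5.0, "bmi": 22.0, "triglycerides": 1.2}
patient = {"fasting_glucose": 8.1, "bmi": None, "triglycerides": 2.4}
print(transparent_risk_score(patient, weights, reference))
```

The score stays interpretable because each indicator's contribution is a visible weighted term, which mirrors the transparency the abstract claims for the WDD models.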


Subject(s)
Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Machine Learning , Algorithms , ROC Curve , Biomarkers
5.
J Med Syst ; 48(1): 37, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564061

ABSTRACT

Computed tomography perfusion (CTP) is a dynamic 4-dimensional imaging technique (3-dimensional volumes captured over approximately 1 min) in which cerebral blood flow is quantified by tracking the passage of a bolus of intravenous contrast with serial imaging of the brain. To diagnose and assess acute ischemic stroke, the standard method relies on summarizing acquired CTPs over the time axis to create maps that show different hemodynamic parameters, such as the timing of the bolus arrival and passage (Tmax and MTT), cerebral blood flow (CBF), and cerebral blood volume (CBV). However, producing accurate CTP maps requires the selection of an arterial input function (AIF), i.e. a time-concentration curve in one of the large feeding arteries of the brain, which is a highly error-prone procedure. Moreover, during approximately one minute of CT scanning, the brain is exposed to ionizing radiation that can alter tissue composition and create free radicals that increase the risk of cancer. This paper proposes a novel end-to-end deep neural network that synthesizes CTP images to generate CTP maps using a learned LSTM Generative Adversarial Network (LSTM-GAN). Our proposed method can improve the precision and generalizability of CTP map extraction by eliminating the error-prone and expert-dependent AIF selection step. Further, our LSTM-GAN does not require the entire CTP time series and can produce CTP maps with a reduced number of time points. By reducing the scanning sequence from about 40 to 9 time points, the proposed method has the potential to minimize scanning time, thereby reducing patient exposure to CT radiation. Our evaluations using the ISLES 2018 challenge dataset consisting of 63 patients showed that our model can generate CTP maps by using only 9 snapshots, without AIF selection, with an accuracy of 84.37%.
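The authors' exact architecture is not described here; a minimal PyTorch sketch of the generator half of such an LSTM-GAN (the adversarial discriminator and training loop are omitted, and the layer sizes, 9 input time points, and per-pixel flattening are assumptions):

```python
import torch
import torch.nn as nn

class CTPMapGenerator(nn.Module):
    """Toy generator in the spirit of the paper's LSTM-GAN: an LSTM reads a
    short CTP time series (9 time points, flattened slices) and a linear head
    emits 4 perfusion maps (Tmax, MTT, CBF, CBV). Architecture details are
    assumptions, not the authors' exact network."""

    def __init__(self, pixels: int, hidden: int = 128, n_maps: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=pixels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, pixels * n_maps)
        self.pixels, self.n_maps = pixels, n_maps

    def forward(self, x):                  # x: (batch, 9, pixels)
        _, (h, _) = self.lstm(x)           # final hidden state summarizes the bolus passage
        out = self.head(h[-1])             # (batch, pixels * n_maps)
        return out.view(-1, self.n_maps, self.pixels)

gen = CTPMapGenerator(pixels=64 * 64)
fake_ctp = torch.randn(2, 9, 64 * 64)      # 2 patients, 9 time points
print(gen(fake_ctp).shape)                 # torch.Size([2, 4, 4096])
```

In the full GAN, a discriminator would compare generated maps against maps computed by conventional deconvolution, pushing the generator to learn the AIF-dependent step implicitly.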


Subject(s)
Ischemic Stroke , Humans , Learning , Brain/diagnostic imaging , Algorithms , Perfusion
6.
Environ Monit Assess ; 196(5): 411, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564123

ABSTRACT

Spatial simulation and projection of ecosystem services value (ESV) changes caused by urban growth are important for sustainable development in arid regions. We developed a new cellular automata model calibrated with the grasshopper optimization algorithm (GOA-CA) for simulating urban growth patterns and assessing the impacts of urban growth on ESV changes under climate change scenarios. The results show that GOA-CA yielded overall accuracy exceeding 98%, with figures of merit (FOM) of 43.2% for 2010 and 38.1% for 2020, indicating the effectiveness of the model. Prairie lost the most ESV (192 million USD), while coniferous forest gained the most (292 million USD) during 2000-2020. Using climate change scenarios as future urban land-use demands, we projected three scenarios of urban growth in Urumqi for 2050 and their impacts on ESV. Our model can easily be applied to simulating urban development, analyzing its impact on ESV, and projecting future scenarios in arid regions worldwide.
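The transition rule is not specified in the abstract; a toy sketch of the cellular-automata half, where the weights and threshold stand in for the parameters a grasshopper optimizer would calibrate:

```python
import numpy as np

def ca_step(urban, suitability, w_density, w_suit, threshold):
    """One cellular-automata step: a cell urbanizes when a weighted sum of its
    3x3 urban-neighbor density and a suitability layer crosses a threshold.
    In GOA-CA the weights/threshold would be calibrated by the grasshopper
    optimization algorithm; here they are assumed values."""
    padded = np.pad(urban, 1)
    # urban-neighbor density in the 3x3 window, excluding the cell itself
    density = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy, 1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ) / 8.0
    score = w_density * density + w_suit * suitability
    return urban | ((score > threshold) & ~urban)

rng = np.random.default_rng(0)
urban = rng.random((50, 50)) < 0.05        # sparse initial urban seed
suit = rng.random((50, 50))                # stand-in land-suitability surface
for _ in range(10):
    urban = ca_step(urban, suit, w_density=0.6, w_suit=0.4, threshold=0.45)
print(int(urban.sum()), "urban cells after 10 steps")
```

ESV accounting would then multiply each converted cell's previous land-cover class by a per-class value coefficient, which is how losses for prairie and gains for coniferous forest could be totalled.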


Subject(s)
Climate Change , Ecosystem , Environmental Monitoring , Algorithms , Desert Climate
7.
BMC Palliat Care ; 23(1): 83, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38556869

ABSTRACT

BACKGROUND: Due to limited numbers of palliative care specialists and/or resources, access to palliative care remains limited in many low- and middle-income countries. Data science methods, such as rule-based algorithms and text mining, have the potential to improve palliative care by facilitating analysis of electronic healthcare records. This study aimed to develop and evaluate a rule-based algorithm for identifying cancer patients who may benefit from palliative care based on the Thai version of the Supportive and Palliative Care Indicators for a Low-Income Setting (SPICT-LIS) criteria. METHODS: The medical records of 14,363 cancer patients aged 18 years and older, diagnosed between 2016 and 2020 at Songklanagarind Hospital, were analyzed. Two rule-based algorithms, strict and relaxed, were designed to identify key SPICT-LIS indicators in the electronic medical records using tokenization and sentiment analysis. The inter-rater reliability between these two algorithms and palliative care physicians was assessed using percentage agreement and Cohen's kappa coefficient. Additionally, factors associated with patients who might benefit from palliative care were examined. RESULTS: The strict rule-based algorithm demonstrated a high degree of accuracy, with 95% agreement and a Cohen's kappa coefficient of 0.83. In contrast, the relaxed rule-based algorithm demonstrated lower agreement (71% agreement and Cohen's kappa of 0.16). Advanced-stage cancer with symptoms such as pain, dyspnea, edema, delirium, xerostomia, and anorexia was identified as a significant predictor of potentially benefiting from palliative care. CONCLUSION: The integration of rule-based algorithms with electronic medical records offers a promising method for enhancing the timely and accurate identification of cancer patients who might benefit from palliative care.
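The SPICT-LIS rules and the paper's Thai-language tokenization are not reproduced in the abstract; a toy sketch of how a strict rule might differ from a relaxed one (the indicator terms, the negation window, and the strict/relaxed split are assumptions):

```python
import re

# Toy strict-vs-relaxed rule for SPICT-LIS-style symptom indicators. The real
# criteria and the Thai tokenization are far richer; terms and negation cues
# here are illustrative only.
TERMS = {"pain", "dyspnea", "edema", "delirium", "xerostomia", "anorexia"}
NEGATIONS = {"no", "denies", "without"}

def tokenize(note: str) -> list:
    return re.findall(r"[a-z]+", note.lower())

def indicators(note: str, strict: bool) -> set:
    tokens = tokenize(note)
    found = set()
    for i, tok in enumerate(tokens):
        if tok not in TERMS:
            continue
        negated = any(t in NEGATIONS for t in tokens[max(0, i - 3) : i])
        if strict and negated:
            continue           # strict rule: drop negated mentions
        found.add(tok)         # relaxed rule: count any mention
    return found

note = "Patient reports severe pain and dyspnea, denies delirium."
print(indicators(note, strict=True))    # {'pain', 'dyspnea'}
print(indicators(note, strict=False))   # {'pain', 'dyspnea', 'delirium'}
```

The gap between the two kappa values reported above (0.83 vs. 0.16) is consistent with how much a relaxed matcher over-counts negated or incidental mentions.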


Subject(s)
Neoplasms , Palliative Care , Humans , Reproducibility of Results , Electronic Health Records , Neoplasms/therapy , Data Mining , Algorithms
8.
PLoS One ; 19(4): e0299888, 2024.
Article in English | MEDLINE | ID: mdl-38564622

ABSTRACT

While the musical instrument classification task is well-studied, there remains a gap in identifying non-pitched percussion instruments, which have greater overlaps in frequency bands and variation in sound quality and play style than pitched instruments. In this paper, we present a musical instrument classifier for detecting tambourines, maracas and castanets, instruments that are often used in early childhood music education. We generated a dataset with diverse instruments (e.g., brand, materials, construction) played in different locations with varying background noise and play styles. We conducted sensitivity analyses to optimize feature selection, windowing time, and model selection. We deployed and evaluated our best model in a mixed reality music application with 12 families in a home setting. Our dataset comprised over 369,000 samples recorded in-lab and 35,361 samples recorded with families in a home setting. The Light Gradient Boosting Machine (LGBM) model performed best using an approximately 93 ms window with only 12 mel-frequency cepstral coefficients (MFCCs) and signal entropy. Our best LGBM model achieved over 84% accuracy across all three instrument families in-lab and over 73% accuracy when deployed to the home. To our knowledge, the compiled dataset of over 369,000 non-pitched instrument samples is the first of its kind. This work also suggests that a low-dimensional feature space is sufficient for the recognition of non-pitched instruments. Real-world deployment and testing of the algorithms with participants of diverse physical and cognitive abilities was also an important contribution toward more inclusive design practices. This paper lays the technological groundwork for a mixed reality music application that can detect children's use of non-pitched percussion instruments to support early childhood music education and play.
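The reported recipe, roughly 93 ms windows with 12 MFCCs plus signal entropy feeding a LightGBM classifier, can be sketched as follows (synthetic audio stands in for the recordings, and the entropy definition is an assumption):

```python
import numpy as np
import librosa
import lightgbm as lgb

def window_features(y, sr, win_s=0.093):
    """~93 ms windows -> 12 MFCC means + a crude signal entropy per window,
    mirroring the low-dimensional feature space the paper reports. The
    entropy definition here is an assumed stand-in."""
    hop = frame = int(sr * win_s)
    feats = []
    for start in range(0, len(y) - frame + 1, hop):
        w = y[start : start + frame]
        mfcc = librosa.feature.mfcc(y=w, sr=sr, n_mfcc=12).mean(axis=1)
        p = np.abs(w) / (np.abs(w).sum() + 1e-12)        # amplitude distribution
        entropy = -(p * np.log2(p + 1e-12)).sum()
        feats.append(np.append(mfcc, entropy))
    return np.array(feats)

# Toy training run: random noise stands in for instrument recordings.
sr = 22050
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.standard_normal(sr), sr) for _ in range(6)])
y = np.repeat([0, 1, 2], len(X) // 3)   # tambourine / maracas / castanets labels
model = lgb.LGBMClassifier(n_estimators=50).fit(X, y)
print(model.predict(X[:3]))
```

Thirteen features per window is tiny by modern audio-classification standards, which is the paper's point: non-pitched percussion appears separable without a large feature space.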


Subject(s)
Music , Percussion , Child , Humans , Child, Preschool , Sound , Algorithms , Cognition
9.
NPJ Syst Biol Appl ; 10(1): 34, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565568

ABSTRACT

Minimal Cut Sets (MCSs) identify sets of reactions which, when removed from a metabolic network, disable certain cellular functions. The traditional search for MCSs within genome-scale metabolic models (GSMMs) targets cellular growth, identifies reaction sets resulting in a lethal phenotype if disrupted, and retrieves a list of corresponding gene, mRNA, or enzyme targets. Using the dual link between MCSs and Elementary Flux Modes (EFMs), our logic programming-based tool aspefm was able to compute MCSs of any size from GSMMs in acceptable run times, and demonstrated better performance than mixed-integer linear programming methods when computing large MCSs. We applied the new MCS methodology to a medically relevant consortium model of two cross-feeding bacteria, Staphylococcus aureus and Pseudomonas aeruginosa. aspefm constraints were used to bias the computation of MCSs toward exchanged metabolites that could complement lethal phenotypes in individual species. We found that interspecies metabolite exchanges could play an essential role in rescuing single-species growth; for instance, inosine could complement lethal reaction knock-outs in the purine synthesis, glycolysis, and pentose phosphate pathways of both bacteria. Finally, MCSs were used to derive a list of promising enzyme targets for consortium-level therapeutic applications that cannot be circumvented via interspecies metabolite exchange.
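aspefm computes MCSs with answer set programming; as a language-neutral illustration of the concept at toy scale, a brute-force enumeration over an invented six-reaction network (which also shows how an exchange reaction such as inosine uptake can rescue an otherwise lethal knock-out):

```python
from itertools import combinations

# Invented toy network: each reaction maps a substrate set to a product set.
# A cut set blocks the biomass reaction; it is minimal if no proper subset
# also blocks it. Brute force only works at toy scale; aspefm's logic
# programming is what makes genome scale tractable.
REACTIONS = {
    "upt_glc": (set(), {"glc"}),
    "glycolysis": ({"glc"}, {"pyr"}),
    "upt_ino": (set(), {"ino"}),           # inosine exchange (cross-feeding)
    "salvage": ({"ino"}, {"pur"}),
    "purine_synth": ({"pyr"}, {"pur"}),
    "biomass": ({"pyr", "pur"}, {"growth"}),
}

def grows(removed: frozenset) -> bool:
    produced, changed = set(), True
    while changed:
        changed = False
        for name, (subs, prods) in REACTIONS.items():
            if name not in removed and subs <= produced and not prods <= produced:
                produced |= prods
                changed = True
    return "growth" in produced

mcs = []
candidates = [r for r in REACTIONS if r != "biomass"]
for k in range(1, len(candidates) + 1):
    for combo in map(frozenset, combinations(candidates, k)):
        if any(prev <= combo for prev in mcs):
            continue                       # superset of a known cut set: not minimal
        if not grows(combo):
            mcs.append(combo)
print([sorted(c) for c in mcs])
```

Note that knocking out purine_synth alone is not lethal here precisely because the inosine exchange keeps purine supply alive, the same rescue effect the abstract describes between the two bacteria.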


Subject(s)
Algorithms , Wound Infection , Humans , Models, Biological , Metabolic Networks and Pathways/genetics , Genome
10.
Sci Rep ; 14(1): 7691, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565845

ABSTRACT

Spinal cord injury (SCI) is a prevalent and serious complication among patients with spinal tuberculosis (STB) that can lead to motor and sensory impairment and potentially paraplegia. This research aims to identify factors associated with SCI in STB patients and to develop a clinically significant predictive model. Clinical data from STB patients at a single hospital were collected and divided into training and validation sets. Univariate analysis was employed to screen clinical indicators in the training set. Multiple machine learning (ML) algorithms were utilized to establish predictive models. Model performance was evaluated and compared using receiver operating characteristic (ROC) curves, area under the curve (AUC), calibration curve analysis, decision curve analysis (DCA), and precision-recall (PR) curves. The optimal model was determined, and a prospective cohort from two other hospitals served as a testing set to assess its accuracy. Model interpretation and variable importance ranking were conducted using the DALEX R package, and the model was deployed on the web using a Shiny app. Ten clinical characteristics were included in the model. The random forest (RF) model emerged as the optimal choice based on the AUC, PR curves, calibration curve analysis, and DCA, achieving a test set AUC of 0.816. Additionally, MONO was identified as the primary predictor of SCI in STB patients through variable importance ranking. The RF predictive model provides an efficient and swift approach for predicting SCI in STB patients.
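The modelling step itself is standard; a minimal sketch of fitting a random forest on tabular clinical indicators and reading off the AUC and variable importances (synthetic data; the paper uses the DALEX R package and a Shiny app, so scikit-learn's impurity importances are only a rough stand-in):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic features stand in for the ten clinical characteristics.
rng = np.random.default_rng(42)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")

# Impurity-based importance ranking (rough analogue of the DALEX ranking
# that surfaced MONO as the primary predictor in the paper).
print(np.argsort(rf.feature_importances_)[::-1][:3], "top indicator indices")
```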


Subject(s)
Spinal Cord Injuries , Tuberculosis, Spinal , Humans , Prospective Studies , Tuberculosis, Spinal/complications , Spinal Cord Injuries/complications , Algorithms , Machine Learning , Retrospective Studies
11.
Methods Mol Biol ; 2797: 67-90, 2024.
Article in English | MEDLINE | ID: mdl-38570453

ABSTRACT

Molecular docking is a popular computational tool in drug discovery. Leveraging structural information, docking software predicts binding poses of small molecules to cavities on the surfaces of proteins. Virtual screening for ligand discovery is a useful application of docking software. In this chapter, using the enigmatic KRAS protein as an example system, we endeavor to teach the reader about best practices for performing molecular docking with UCSF DOCK. We discuss methods for virtual screening and docking molecules on KRAS. We present the following six points to optimize our docking setup for prosecuting a virtual screen: protein structure choice, pocket selection, optimization of the scoring function, modification of sampling spheres and sampling procedures, choosing an appropriate portion of chemical space to dock, and the choice of which top scoring molecules to pick for purchase.


Asunto(s)
Algoritmos , Proteínas Proto-Oncogénicas p21(ras) , Simulación del Acoplamiento Molecular , Proteínas Proto-Oncogénicas p21(ras)/genética , Proteínas Proto-Oncogénicas p21(ras)/metabolismo , Programas Informáticos , Proteínas/química , Descubrimiento de Drogas , Ligandos , Unión Proteica , Sitios de Unión
12.
Exp Dermatol ; 33(4): e15070, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38570935

ABSTRACT

Cutaneous melanoma poses a formidable challenge within the field of oncology, marked by its aggressive nature and capacity for metastasis. Despite extensive research uncovering numerous genetic and molecular contributors to cutaneous melanoma development, there remains a critical knowledge gap concerning the role of lipids, notably low-density lipoprotein (LDL), in this lethal skin cancer. This article endeavours to bridge this knowledge gap by delving into the intricate interplay between LDL metabolism and cutaneous melanoma, shedding light on how lipids influence tumour progression, immune responses and potential therapeutic avenues. Genes associated with LDL metabolism were extracted from the GSEA database. We acquired and analysed single-cell sequencing data (GSE215120) and bulk-RNA sequencing data, including the TCGA data set, GSE19234, GSE22153 and GSE65904. Our analysis unveiled the heterogeneity of LDL across various cell types at the single-cell sequencing level. Additionally, we constructed an LDL-related signature (LRS) using machine learning algorithms, incorporating differentially expressed genes and highly correlated genes. The LRS serves as a valuable tool for assessing the prognosis, immunity and mutation status of patients with cutaneous melanoma. Furthermore, we conducted experiments on A375 and WM-115 cells to validate the function of PPP2R1A, a pivotal gene within the LRS. Our comprehensive approach, combining advanced bioinformatics analyses with an extensive review of current literature, presents compelling evidence regarding the significance of LDL within the cutaneous melanoma microenvironment.


Asunto(s)
Melanoma , Neoplasias Cutáneas , Humanos , Melanoma/genética , Neoplasias Cutáneas/genética , Pronóstico , Algoritmos , Aprendizaje Automático , Perfilación de la Expresión Génica , Lípidos , Microambiente Tumoral/genética
13.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Recently, unsupervised deep learning methods have become more popular for OCT despeckling, but they still require unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for ground-truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. In comparison to existing unsupervised methods, Double-free Net obtains superior denoising performance when trained on datasets comprising retinal and human tissue images without clean images. The efficacy of Double-free Net in denoising holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and exhibits strong convenience and adaptability across different OCT images.
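The core training trick, building supervision from a single noisy image by sub-sampling it and synthesizing a noisier counterpart, can be sketched as follows (the speckle model and the diagonal sub-sampling scheme are assumptions, not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(noisy):
    """Hedged sketch of Double-free-Net-style pair synthesis from ONE noisy
    OCT image: (1) sub-sample two half-resolution views, (2) make one of them
    noisier with extra multiplicative speckle. A network trained to map the
    noisier view toward the other needs no clean ground truth and no
    repeated scanning."""
    a = noisy[0::2, 0::2]                                 # sub-sampled view 1 (input side)
    b = noisy[1::2, 1::2]                                 # sub-sampled view 2 (target side)
    speckle = rng.gamma(shape=4.0, scale=0.25, size=a.shape)  # assumed speckle model, mean ~1
    return a * speckle, b                                 # (noisier input, noisy target)

clean = np.tile(np.linspace(0.2, 1.0, 256), (256, 1))     # synthetic tissue-like gradient
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
inp, target = make_training_pair(noisy)
print(inp.shape, target.shape)                            # (128, 128) twice
```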


Asunto(s)
Algoritmos , Tomografía de Coherencia Óptica , Humanos , Tomografía de Coherencia Óptica/métodos , Retina/diagnóstico por imagen , Cintigrafía , Procesamiento de Imagen Asistido por Computador/métodos
14.
Kardiologiia ; 64(3): 63-71, 2024 Mar 31.
Article in Russian | MEDLINE | ID: mdl-38597764

ABSTRACT

This review addresses the capabilities of stress EchoCG as a simple, non-invasive, non-radiation method for diagnosing occult disorders of coronary blood flow in patients with non-ST-elevation acute coronary syndrome and a low-risk electrocardiogram. The capabilities of the enhanced stress EchoCG protocol rest on supplementing the standard detection of transient disturbances of local contractility, generally associated with coronary artery obstruction, with an assessment of heart rate reserve, coronary reserve, and other parameters. This approach is considered promising for characterizing heart function during exercise more completely and for an accurate prognosis of the clinical case, allowing patient management tactics to be determined beyond selection for myocardial revascularization alone.


Subject(s)
Acute Coronary Syndrome , Coronary Occlusion , Humans , Acute Coronary Syndrome/diagnostic imaging , Echocardiography, Stress , Heart , Algorithms
15.
Arkh Patol ; 86(2): 65-71, 2024.
Article in Russian | MEDLINE | ID: mdl-38591909

ABSTRACT

The review presents key concepts and global developments in the field of artificial intelligence as used in pathological anatomy. It examines two types of artificial intelligence (AI): weak and strong. Experimental algorithms that use both deep machine learning and computer vision technologies to work with whole-slide images (WSI), diagnose, and predict the prognosis of various malignant neoplasms are reviewed. At the current stage of development of computational (digital) pathology, weak artificial intelligence shows significantly better results in speeding up and refining diagnostic procedures than strong artificial intelligence, which displays signs of general intelligence. The article also discusses three options for the further development of AI assistants for pathologists based on large language model technologies (strong AI): ChatGPT (PathAsst), Flan-PaLM2, and LIMA. The analysis of the literature identified key problems in the field: the equipping of pathology institutions, the shortage of experts to train neural networks, and the lack of strict criteria for the clinical viability of AI diagnostic technologies.


Asunto(s)
Inteligencia Artificial , Aprendizaje Profundo , Humanos , Redes Neurales de la Computación , Algoritmos , Aprendizaje Automático
16.
J Neural Eng ; 21(2)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38592090

ABSTRACT

Objective. The extended infomax algorithm for independent component analysis (ICA) can separate sub- and super-Gaussian signals but converges slowly as it uses stochastic gradient optimization. In this paper, an improved extended infomax algorithm is presented that converges much faster. Approach. Accelerated convergence is achieved by replacing the natural gradient learning rule of extended infomax by a fully-multiplicative orthogonal-group based update scheme of the ICA unmixing matrix, leading to an orthogonal extended infomax algorithm (OgExtInf). The computational performance of OgExtInf was compared with original extended infomax and with two fast ICA algorithms: the popular FastICA and Picard, a preconditioned limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm belonging to the family of quasi-Newton methods. Main results. OgExtInf converges much faster than original extended infomax. For small-size electroencephalogram (EEG) data segments, as used for example in online EEG processing, OgExtInf is also faster than FastICA and Picard. Significance. OgExtInf may be useful for fast and reliable ICA, e.g. in online systems for epileptic spike and seizure detection or brain-computer interfaces.
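OgExtInf's orthogonal multiplicative update is not reproduced in the abstract; for orientation, the classic extended-infomax natural-gradient step it accelerates looks roughly like this (learning rate, iteration count, and the crude scaling in place of proper whitening are assumed):

```python
import numpy as np

def extended_infomax_step(W, X, lr=0.01):
    """One natural-gradient step of classic extended infomax (the slow
    baseline that OgExtInf speeds up). X: (channels, samples), W: unmixing
    matrix. k_i = +/-1 switches the nonlinearity per source for super- vs.
    sub-Gaussian signals (the standard switching criterion)."""
    n, T = X.shape
    U = W @ X
    k = np.sign(np.mean(1.0 / np.cosh(U) ** 2, axis=1) * np.mean(U ** 2, axis=1)
                - np.mean(np.tanh(U) * U, axis=1))
    grad = np.eye(n) - (np.diag(k) @ np.tanh(U) @ U.T + U @ U.T) / T
    return W + lr * grad @ W

rng = np.random.default_rng(0)
S = np.vstack([rng.laplace(size=20000),        # super-Gaussian source
               rng.uniform(-1, 1, 20000)])     # sub-Gaussian source
A = np.array([[1.0, 0.5], [0.3, 1.0]])         # mixing matrix
X = A @ S
scale = 1.0 / X.std(axis=1, keepdims=True)     # crude scaling (whitening is better)
X, A_eff = scale * X, scale * A                # effective mixing after scaling

W = np.eye(2)
for _ in range(1000):
    W = extended_infomax_step(W, X)
print(W @ A_eff)   # should approach a scaled permutation matrix (up to sign/order)
```

The fixed-point character of this update is what makes it slow; OgExtInf replaces the additive step with a multiplicative update on the orthogonal group, which is the source of the reported speedup.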


Asunto(s)
Algoritmos , Interfaces Cerebro-Computador , Electroencefalografía , Aprendizaje , Distribución Normal
17.
Adv Tech Stand Neurosurg ; 50: 201-229, 2024.
Article in English | MEDLINE | ID: mdl-38592532

ABSTRACT

INTRODUCTION: Due to constant development of the technique, over the last 30 years endovascular treatment of intracranial aneurysms (IAs) has gradually superseded traditional surgery in the majority of centers. However, clipping still represents the best treatment for some anterior circulation IAs, according to their angioarchitectural, topographical, and hemodynamic characteristics. Thus, the identification of residual indications for clipping and the maintenance of training programs in vascular neurosurgery appear more important than ever. MATERIALS AND METHODS: We reviewed our last 10 years of institutional experience with ruptured and unruptured IA clipping. We appraised in detail all technical refinements we adopted during this time span and analyzed the difficulties we met in teaching the aneurysm clipping technique to residents and fellows. We then describe the algorithm of safety rules we used to teach young neurosurgeons how to surgically approach anterior circulation IAs and develop a procedural memory that can be called upon in emergency situations. RESULTS: We identified seven pragmatic technical key points for clipping of the most frequent anterior circulation IAs and constructed a didactic approach for teaching young cerebrovascular surgeons. In general, they concern craniotomy; cisternostomy; obtaining proximal control; cranial nerve, perforator, and vein preservation; the necessity of specific corticectomy; aneurysm neck dissection; and clipping. CONCLUSION: In the setting of IA clipping, particularly when the aneurysm has ruptured, the young cerebrovascular surgeon needs to respect an algorithm of safety rules, which is essential not only to avoid major complications but also to help manage potentially life-threatening situations when difficulties arise.


Asunto(s)
Aneurisma Intracraneal , Cirujanos , Humanos , Aneurisma Intracraneal/cirugía , Neurocirujanos , Procedimientos Neuroquirúrgicos , Algoritmos
18.
Sci Rep ; 14(1): 8071, 2024 04 05.
Article in English | MEDLINE | ID: mdl-38580700

ABSTRACT

Over recent years, researchers and practitioners have seen massive and continuous improvements in the computational resources available for their use. This has made the use of resource-hungry machine learning (ML) algorithms feasible and practical. Moreover, several advanced techniques are being used to boost the performance of such algorithms even further, including various transfer learning techniques, data augmentation, and feature concatenation. Normally, the use of these advanced techniques depends heavily on the size and nature of the dataset being used. In the case of fine-grained medical image sets, which have subcategories within the main categories of the image set, there is a need to find the combination of techniques that works best on these types of images. In this work, we utilize these advanced techniques to find the best combinations for building a state-of-the-art lumbar disc herniation computer-aided diagnosis system. We have evaluated the system extensively, and the results show that the diagnosis system achieves an accuracy of 98% when compared with human diagnosis.
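Of the techniques named, feature concatenation is the most concrete; a minimal PyTorch sketch of fusing two backbone embeddings before a classification head (the tiny stand-in backbones, feature sizes, and two-class head are assumptions, not the paper's models, which would be pretrained networks):

```python
import torch
import torch.nn as nn

class ConcatFeatureClassifier(nn.Module):
    """Sketch of the 'feature concatenation' strategy: two backbones embed the
    same image, their feature vectors are concatenated, and a small head
    classifies disc herniation. Stand-in backbones, not the paper's models."""

    def __init__(self, feat_a=64, feat_b=64, n_classes=2):
        super().__init__()
        self.backbone_a = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(8, feat_a))
        self.backbone_b = nn.Sequential(nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(8, feat_b))
        self.head = nn.Linear(feat_a + feat_b, n_classes)

    def forward(self, x):
        feats = torch.cat([self.backbone_a(x), self.backbone_b(x)], dim=1)
        return self.head(feats)

model = ConcatFeatureClassifier()
print(model(torch.randn(4, 1, 128, 128)).shape)   # torch.Size([4, 2])
```

With pretrained backbones, only the concatenated head may need fine-tuning, which is why the combination of transfer learning and feature concatenation suits small, fine-grained medical image sets.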


Asunto(s)
Desplazamiento del Disco Intervertebral , Humanos , Desplazamiento del Disco Intervertebral/diagnóstico por imagen , Diagnóstico por Computador/métodos , Algoritmos , Aprendizaje Automático , Computadores
19.
Sci Rep ; 14(1): 8106, 2024 04 06.
Article in English | MEDLINE | ID: mdl-38582913

ABSTRACT

Wheat head detection and counting using deep learning techniques has gained considerable attention in precision agriculture applications such as wheat growth monitoring, yield estimation, and resource allocation. However, the accurate detection of small and dense wheat heads remains challenging due to the inherent variations in their size, orientation, appearance, aspect ratios, density, and the complexity of imaging conditions. To address these challenges, we propose a novel approach called the Oriented Feature Pyramid Network (OFPN) that focuses on detecting rotated wheat heads by utilizing oriented bounding boxes. In order to facilitate the development and evaluation of our proposed method, we introduce a novel dataset named the Rotated Global Wheat Head Dataset (RGWHD). This dataset is constructed by manually annotating images from the Global Wheat Head Detection (GWHD) dataset with oriented bounding boxes. Furthermore, we incorporate a Path-aggregation and Balanced Feature Pyramid Network into our architecture to effectively extract both semantic and positional information from the input images. This is achieved by leveraging feature fusion techniques at multiple scales, enhancing the detection capabilities for small wheat heads. To improve the localization and detection accuracy of dense and overlapping wheat heads, we employ the Soft-NMS algorithm to filter the proposed bounding boxes. Experimental results indicate the superior performance of the OFPN model, achieving a remarkable mean average precision of 85.77% in oriented wheat head detection, surpassing six other state-of-the-art models. Moreover, we observe a substantial improvement in the accuracy of wheat head counting, with an accuracy of 93.97%. This represents an increase of 3.12% compared to the Faster R-CNN method. Both qualitative and quantitative results demonstrate the effectiveness of the proposed OFPN model in accurately localizing and counting wheat heads within various challenging scenarios.
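The Soft-NMS filtering step named above can be sketched directly. A minimal NumPy version of the linear-decay variant on axis-aligned boxes (the paper filters oriented boxes and may use the Gaussian variant, so treat this as illustrative):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear Soft-NMS: instead of deleting boxes that overlap the current
    best detection, decay their scores by (1 - IoU). This keeps dense,
    overlapping wheat heads that hard NMS would wrongly suppress."""
    scores = scores.astype(float).copy()
    keep, idx = [], list(range(len(boxes)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idx.remove(best)
        for i in idx:
            iou = box_iou(boxes[best], boxes[i])
            if iou > iou_thresh:
                scores[i] *= 1.0 - iou     # decay instead of hard suppression
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # the heavily overlapping box is down-weighted, not removed
```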


Asunto(s)
Agricultura , Triticum , Algoritmos , Tractos Piramidales , Asignación de Recursos
20.
Nat Commun ; 15(1): 3047, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589369

ABSTRACT

Clustering biological sequences into similar groups is an increasingly important task as the number of available sequences continues to grow exponentially. Search-based approaches to clustering scale super-linearly with the number of input sequences, making it impractical to cluster very large sets of sequences. Approaches to clustering sequences in linear time currently lack the accuracy of super-linear approaches. Here, I set out to develop and characterize a strategy for clustering with linear time complexity that retains the accuracy of less scalable approaches. The resulting algorithm, named Clusterize, sorts sequences by relatedness to linearize the clustering problem. Clusterize produces clusters with accuracy rivaling popular programs (CD-HIT, MMseqs2, and UCLUST) but exhibits linear asymptotic scalability. Clusterize generates higher accuracy and oftentimes much larger clusters than Linclust, a fast linear time clustering algorithm. I demonstrate the utility of Clusterize by accurately solving different clustering problems involving millions of nucleotide or protein sequences.
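The "sort by relatedness to linearize clustering" idea can be caricatured in a few lines: derive a sort key that tends to place similar sequences next to each other, then make one greedy pass comparing each sequence to the current cluster representative (the minimizer-style key and the identity threshold are assumptions; Clusterize's actual key and comparisons are more sophisticated):

```python
from difflib import SequenceMatcher

def sort_key(seq, k=4):
    """Crude relatedness key: the lexicographically smallest k-mer. Similar
    sequences tend to share minimizers, so sorting brings them near each
    other; this is the 'sort to linearize' idea, vastly simplified."""
    return min(seq[i : i + k] for i in range(len(seq) - k + 1))

def clusterize_sketch(seqs, identity=0.8):
    order = sorted(range(len(seqs)), key=lambda i: sort_key(seqs[i]))
    clusters, rep = [], None
    for i in order:
        if rep is not None and SequenceMatcher(None, seqs[rep], seqs[i]).ratio() >= identity:
            clusters[-1].append(i)        # similar to the current representative
        else:
            clusters.append([i])          # open a new cluster
            rep = i
    return clusters

seqs = ["ACGTACGTAA", "ACGTACGTAT", "TTTTGGGGCC", "TTTTGGGGCA", "GGGGGGGGGG"]
print(clusterize_sketch(seqs))            # [[0, 1], [3, 2], [4]]
```

Because each sequence is compared only against the current representative after one O(n log n) sort, the scan itself is linear in the number of sequences, which is the scalability property the abstract claims.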


Subject(s)
Algorithms , Amino Acid Sequence , Cluster Analysis